72 research outputs found

    Semantic A-translation and Super-consistency entail Classical Cut Elimination

    We show that if a theory R defined by a rewrite system is super-consistent, the classical sequent calculus modulo R enjoys the cut elimination property, which was an open question. For such theories it was already known that proofs strongly normalize in natural deduction modulo R, and that cut elimination holds in the intuitionistic sequent calculus modulo R. We first define a syntactic and a semantic version of Friedman's A-translation, showing that it preserves the structure of pseudo-Heyting algebras, our semantic framework. Then we relate the interpretation of a theory in the A-translated algebra and its A-translation in the original algebra. This allows us to show the stability of the super-consistency criterion and the cut elimination theorem.
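
    For readers unfamiliar with Friedman's A-translation, the textbook clauses are sketched below; this is the standard formulation, given only to fix intuitions, and the paper's syntactic and semantic versions for deduction modulo may differ in detail. Every atom is weakened by the fixed formula A and falsity is replaced by A, while the other connectives are translated homomorphically.

        % One common formulation of Friedman's A-translation
        % (propositional and quantifier clauses); P ranges over atoms.
        \begin{align*}
          P^{A}                          &= P \lor A\\
          \bot^{A}                       &= A\\
          (\varphi \land \psi)^{A}       &= \varphi^{A} \land \psi^{A}\\
          (\varphi \lor \psi)^{A}        &= \varphi^{A} \lor \psi^{A}\\
          (\varphi \Rightarrow \psi)^{A} &= \varphi^{A} \Rightarrow \psi^{A}\\
          (\forall x\, \varphi)^{A}      &= \forall x\, \varphi^{A}\\
          (\exists x\, \varphi)^{A}      &= \exists x\, \varphi^{A}
        \end{align*}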

    Normalization by Completeness with Heyting Algebras

    Usual normalization by evaluation techniques have a strong relationship with completeness with respect to Kripke structures. But Kripke structures are not the only semantics that fits intuitionistic logic: Heyting algebras are a more algebraic alternative. In this paper, we focus on this less investigated area: how completeness with respect to Heyting algebras generates a normalization algorithm for a natural deduction calculus, in the propositional fragment. Our main contributions are that we prove in a direct way the completeness of natural deduction with respect to Heyting algebras, that the underlying algorithm natively deals with disjunction, that we formalized those proofs in Coq, and that we give an extracted algorithm.
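
    As background for the algebraic side (a standard definition, not specific to this paper): a Heyting algebra is a bounded lattice in which implication is interpreted by the relative pseudo-complement, and a propositional formula is intuitionistically provable exactly when its interpretation is the top element under every valuation in every Heyting algebra.

        % Interpretation of propositional formulas in a Heyting algebra
        % (\wedge and \vee are the lattice meet and join, \to the relative
        % pseudo-complement, 0 and 1 the bottom and top elements).
        \begin{align*}
          [\![\bot]\!]_\rho &= 0 &
          [\![\top]\!]_\rho &= 1\\
          [\![\varphi \land \psi]\!]_\rho &= [\![\varphi]\!]_\rho \wedge [\![\psi]\!]_\rho &
          [\![\varphi \lor \psi]\!]_\rho &= [\![\varphi]\!]_\rho \vee [\![\psi]\!]_\rho\\
          [\![\varphi \Rightarrow \psi]\!]_\rho &= [\![\varphi]\!]_\rho \to [\![\psi]\!]_\rho &
          a \to b &= \max\,\{\, c \mid c \wedge a \le b \,\}
        \end{align*}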

    A syntactic soundness proof for free-variable tableaux with on-the-fly Skolemization

    We prove the syntactic soundness of classical tableaux with free variables and on-the-fly Skolemization. Soundness proofs are usually built from semantic arguments, and this is, to our knowledge, the first proof that appeals to syntactic means. We actually prove the soundness property with respect to the cut-free sequent calculus. This requires great care because of the additional liberty in freshness checking allowed by the use of Skolem terms. In contrast to semantic soundness, we gain the possibility to state a cut elimination theorem for sequent calculus, under the proviso that completeness of the method holds. We believe that such techniques can be applied to tableaux in other logics as well.
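
    To fix ideas about the rules in question (a generic textbook formulation of the δ⁺-style rule, which may differ from the exact calculus of the paper): universal-force formulas are instantiated with fresh free variables, and existential-force formulas are Skolemized on the fly over the free variables they contain.

        % Free-variable tableau expansion rules (refutation style):
        %   gamma: instantiate with a fresh free variable X;
        %   delta: Skolemize on the fly, the fresh symbol f taking as
        %          arguments the free variables of the delta-formula.
        \[
          \frac{\forall x\,\varphi}{\varphi[x := X]}\ \gamma
          \qquad\qquad
          \frac{\exists x\,\varphi}{\varphi[x := f(X_1,\dots,X_n)]}\ \delta
        \]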

    Dependency Pairs Termination in Dependent Type Theory Modulo Rewriting

    Dependency pairs are a key concept at the core of modern automated termination provers for first-order term rewriting systems. In this paper, we introduce an extension of this technique to a large class of dependently typed higher-order rewriting systems. This extends previous results by Wahlstedt on the one hand and the first author on the other hand to strong normalization and non-orthogonal rewriting systems. This new criterion is implemented in the type-checker Dedukti.
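
    For readers new to dependency pairs, here is the classical first-order picture (a textbook example, independent of the dependent-type extension introduced in the paper): each rule whose right-hand side contains a call to a defined symbol yields a dependency pair, and termination follows when no infinite chain of such pairs exists, as the sketch below shows for unary addition.

        % Rewrite system for addition on unary natural numbers and its
        % single dependency pair (ADD marks the defined symbol add).
        \begin{align*}
          \mathsf{add}(0, y)             &\to y\\
          \mathsf{add}(\mathsf{s}(x), y) &\to \mathsf{s}(\mathsf{add}(x, y))\\
          \mathsf{ADD}(\mathsf{s}(x), y) &\to \mathsf{ADD}(x, y)
        \end{align*}
        % The first argument of ADD strictly decreases along any chain of
        % dependency pairs, so no infinite chain exists and the system
        % terminates.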

    A Simple Proof That Super-Consistency Implies Cut Elimination

    We give a simple and direct proof that super-consistency implies the cut elimination property in deduction modulo. This proof can be seen as a simplification of the proof that super-consistency implies proof normalization. It also takes ideas from the semantic proofs of cut elimination that proceed by proving the completeness of the cut-free calculus. As an application, we compare our work with the cut elimination theorems in higher-order logic that involve V-complexes.
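
    Roughly stated, and paraphrasing Dowek's definition rather than the paper's exact wording: super-consistency asks for models valued in arbitrary pre-Heyting algebras, not just in the two-element one, which is what makes the semantic cut-elimination argument go through.

        % Super-consistency, informally (pre-Heyting algebras are also
        % called truth values algebras in Dowek's work).
        \[
          R \text{ super-consistent}
          \iff
          \text{for every full, ordered and complete pre-Heyting algebra } \mathcal{B},
          \ R \text{ has a } \mathcal{B}\text{-valued model.}
        \]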

    Orthogonality and Boolean Algebras for Deduction Modulo

    Originating from automated theorem proving, deduction modulo removes computational arguments from proofs by interleaving rewriting with the deduction process. From a proof-theoretic point of view, deduction modulo defines a generic notion of cut that applies to any first-order theory presented as a rewrite system. In such a setting, one can prove cut-elimination theorems that apply to many theories, provided they verify some generic criterion. Pre-Heyting algebras are a generalization of Heyting algebras used by Dowek to provide a semantic intuitionistic criterion, called super-consistency, for generic cut elimination. This paper uses pre-Boolean algebras (generalizing Boolean algebras) and biorthogonality to prove a generic cut-elimination theorem for the classical sequent calculus modulo. It thereby gives a novel application of reducibility candidate techniques, avoiding the use of proof terms and simplifying the arguments.
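
    As a reminder of the biorthogonality construction behind the reducibility-candidates argument (stated here in its generic form; the paper instantiates it on sequent proofs): an orthogonality relation induces a closure operator by double orthogonal, and the candidates are the closed sets.

        % Orthogonal and biorthogonal of a set X, for an orthogonality
        % relation t \perp \pi between two kinds of objects.
        \begin{align*}
          X^{\perp} &= \{\, \pi \mid \forall t \in X,\ t \perp \pi \,\}\\
          X &\subseteq X^{\perp\perp}, \qquad X^{\perp} = X^{\perp\perp\perp}
        \end{align*}
        % A set is a candidate when X = X^{\perp\perp}, i.e. when it is
        % closed under double orthogonal.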

    ALICe: A Framework to Improve Affine Loop Invariant Computation

    A crucial point in program analysis is the computation of loop invariants. Accurate invariants are required to prove properties of a program, but they are difficult to compute. Extensive research has been carried out but, to the best of our knowledge, no benchmark has ever been developed to compare algorithms and tools. We present ALICe, a toolset to compare automatic computation techniques for affine loop scalar invariants. It comes with a benchmark that we built from 102 test cases found in the loop-invariant literature, and it interfaces with three analysis programs that rely on different techniques: Aspic, ISL and PIPS. Conversion tools are provided to handle the format heterogeneity of these programs. Experimental results show the importance of model coding and the poor performance of PIPS on concurrent loops. To tackle these issues, we use two model restructuring techniques, whose correctness is proved in Coq, and discuss the improvements achieved.
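
    To make the objects of study concrete, here is a toy affine loop in the spirit of the benchmark's test cases (an illustrative example written for this summary, not an actual ALICe entry): the analyzers are expected to discover the affine relation between the scalar variables at the loop head.

        #include <assert.h>

        /* Toy affine loop. A sound analyzer should infer the affine
         * loop-head invariant  y == 2 * x && 0 <= x,  from which the
         * assertion at loop exit follows. Illustrative only, not an
         * actual ALICe test case. */
        int toy_loop(int n)
        {
            int x = 0, y = 0;
            while (x < n) {
                x = x + 1;
                y = y + 2;
            }
            assert(y == 2 * x);  /* consequence of the invariant */
            return y;
        }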

    Computing Invariants with Transformers: Experimental Scalability and Accuracy

    Using abstract interpretation, invariants are usually obtained by iteratively solving a system of equations linking preconditions according to program statements. However, it is also possible to first abstract the statements as transformers, and then propagate the preconditions using the transformers. The second approach is modular because procedures and loops can be abstracted once and for all, avoiding an iterative resolution over the call graph and all the control flow graphs. However, the transformer approach based on polyhedral abstract domains incurs two penalties: some invariant accuracy may be lost when computing transformers, and the execution time may increase exponentially because the dimension of a transformer is twice the dimension of a precondition. The purposes of this article are 1) to measure the benefits of the modular approach and its drawbacks in terms of execution time and accuracy, using significant examples and ALICe, a newly developed benchmark for loop invariant analysis, 2) to present a new technique designed to reduce the accuracy loss when computing transformers, 3) to evaluate experimentally the accuracy gains that this new technique and other previously discussed ones provide on the ALICe test cases, and 4) to compare the execution times and accuracies of different tools: ASPIC, ISL, PAGAI and PIPS. Our results suggest that the transformer-based approach used in PIPS, once improved with transformer lists, is as accurate as the other tools on the ALICe benchmark. Its modularity nevertheless leads to shorter execution times when dealing with nested loops and procedure calls found in real applications.
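
    A schematic illustration of the transformer approach on the toy loop above (made-up relations, not taken from the article): the loop body is abstracted once as an affine relation between input and output states, its reflexive-transitive closure is over-approximated in the same domain, and the invariant is then obtained by applying this closure to the precondition. Note that the transformer relates (x, y) to (x', y') and thus lives in twice the dimension of a precondition.

        % Body transformer, its affine over-approximated closure, and the
        % invariant obtained from the precondition x = y = 0.
        \begin{align*}
          T(x, y, x', y')     &\;\equiv\; x' = x + 1 \;\wedge\; y' = y + 2\\
          T^{*}(x, y, x', y') &\;\equiv\; x' \ge x \;\wedge\; y' - y = 2\,(x' - x)\\
          T^{*}(0, 0, x', y') &\;\equiv\; x' \ge 0 \;\wedge\; y' = 2\,x'
        \end{align*}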

    Preservation of Lyapunov-Theoretic Proofs: From Real to Floating-Point Arithmetic

    In a paper, Feron presents how Lyapunov-theoretic proofs of stability can be migrated toward computer-readable and verifiable certificates of control-software behavior by relying on Floyd's and Hoare's proof systems. However, Lyapunov-theoretic proofs are stated for exact, real arithmetic and do not accurately represent the behavior of realistic programs run with machine arithmetic. We address the issue of preserving those proofs in the presence of rounding errors resulting from the use of floating-point arithmetic: we present an automatic tool, based on a theoretical framework whose soundness is proved in Coq, that translates Feron's proof invariants on real arithmetic into similar invariants on floating-point numbers and preserves the proof structure. We show how our methodology allows us to verify whether stability invariants still hold for the concrete implementation of the controller. We study in detail the application of our tool to the open-loop system of Feron's paper and show that stability is preserved out of the box. We also translate Feron's proof for the closed-loop system, and discuss the conditions under which the system remains stable.
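
    As a schematic sketch of the kind of invariant translation involved (the one-dimensional system, the constants and the error bound below are made up for illustration; Feron's controller and the tool's actual bounds differ): a quadratic invariant proved over the reals is weakened by a margin accounting for rounding before being asserted on the floating-point implementation.

        #include <assert.h>

        /* Schematic one-dimensional "controller" step. Over the reals the
         * quadratic invariant  P * x * x <= 1.0  is preserved because
         * |A| < 1. On the floating-point implementation the invariant is
         * weakened by a margin EPS meant to account for rounding errors;
         * all constants here are hypothetical. */
        #define P   0.5
        #define A   0.9
        #define EPS 1e-6

        double step(double x)
        {
            /* assumed precondition: P * x * x <= 1.0 + EPS */
            double x_next = A * x;
            /* weakened floating-point invariant still holds: A*A times
             * (1.0 + EPS), plus the tiny rounding error of the product,
             * stays below 1.0 + EPS for these constants. */
            assert(P * x_next * x_next <= 1.0 + EPS);
            return x_next;
        }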